Category: Artificial intelligence

  • Mark Zuckerberg Plans New AI Bot Initiative for 2024

    Mark Zuckerberg Plans New AI Bot Initiative for 2024

    Key Takeaways

    1. Meta has introduced 28 new AI bots on WhatsApp, Facebook Messenger, and Instagram, leading to user complaints about their interactions.
    2. The bots can autonomously start conversations and make suggestions, such as movie recommendations.
    3. A follow-up messaging policy restricts bots from contacting users unless the user initiated the conversation and asked at least five questions in 14 days.
    4. Meta aims to enhance user engagement through these bots, potentially increasing ad visibility and revenue.
    5. It is unclear if Meta plans to monetize the AI chatbots or integrate them into its virtual reality platform, “Horizon Worlds.”


    Just when we thought the era of bothersome AI bots was behind us, Meta has introduced 28 AI bots across its WhatsApp, Facebook Messenger, and Instagram platforms over the past year. These bots began independently posting images and starting conversations with actual users, leading to a surge of complaints from users who found it impossible to block them. Meta attributed this issue to a glitch.

    New Developments in AI Bots

    As reported by Business Insider, Mark Zuckerberg is revisiting the use of AI bots. Internal documents from Alignerr, a company that focuses on AI training, indicate that Meta Platforms is developing new AI bots that will interact with users autonomously on services like Messenger, WhatsApp, and Instagram in the near future. They will be able to start conversations on their own, for example by suggesting movies and offering related recommendations.

    Follow-Up Messaging Policy

    Meta has confirmed to TechCrunch that follow-up messaging by AI bots is permitted only under specific circumstances. The bot can reach out to a user again only if that user previously started a conversation and asked the AI at least five questions within a span of 14 days. Only when both conditions are met can the bot send a follow-up message.

    Meta also assures that no repeated messages will be sent: if the user does not reply to the bot's follow-up, no further attempts are made. This approach aims to keep interactions with the AI from becoming overwhelming. Testing additionally indicates that the bots can remember chat content for a limited duration, allowing them to retrieve information from earlier conversations within that timeframe.
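
    Taken together, these rules amount to a simple eligibility check. The minimal Python sketch below models them; the function and field names are hypothetical, since Meta has not published an API for this behavior, and only the five-question and 14-day thresholds come from the reporting above.

    ```python
    from datetime import datetime, timedelta

    def may_send_follow_up(user_started_chat: bool,
                           question_times: list[datetime],
                           already_followed_up: bool,
                           now: datetime) -> bool:
        """Hypothetical model of the described policy: the bot may message
        first only if the user opened the conversation, asked at least five
        questions in the last 14 days, and has not already received a
        follow-up that went unanswered."""
        if not user_started_chat or already_followed_up:
            return False
        window_start = now - timedelta(days=14)
        recent_questions = [t for t in question_times if t >= window_start]
        return len(recent_questions) >= 5
    ```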

    Engaging Users and Business Implications

    A spokesperson for Meta explained that this feature aims to assist users in discovering more engaging topics and promoting deeper interactions with AI. Users who remain on the platform longer tend to see more ads, which in turn boosts the company’s business model.

    Nevertheless, it remains uncertain whether Meta intends to monetize its AI chatbots. The company has not clarified whether sponsored content or paid responses will be part of its strategy. Furthermore, it is unclear if these chatbots will find a place in Meta’s virtual reality environment, “Horizon Worlds”, in the future.

    Source:
    Link

  • Google Signs 200 MW Corporate Fusion Power Deal with Commonwealth

    Google Signs 200 MW Corporate Fusion Power Deal with Commonwealth

    Key Takeaways

    1. Google signed a corporate power-purchase agreement for 200 megawatts of fusion energy from the upcoming ARC plant in Virginia, developed by Commonwealth Fusion Systems.
    2. This agreement highlights the increasing demand for energy due to AI workloads, with Google consuming 30.8 million MWh of electricity in 2023, primarily for data centers.
    3. Commonwealth Fusion Systems aims to achieve commercial fusion operation in the early 2030s, using high-temperature superconducting magnets for plasma confinement.
    4. Google has been a financial supporter of CFS since 2021, and this agreement strengthens their partnership while encouraging other fusion startups to pursue corporate clients.
    5. Despite challenges in achieving practical fusion energy, commitments from companies like Google suggest a potential shift towards clean baseload electricity for the digital economy.


    Google has made a significant move by signing the first direct corporate power-purchase agreement for fusion energy. The tech giant has committed to purchasing 200 megawatts from the upcoming ARC plant, which is being developed by Commonwealth Fusion Systems in Virginia. Once operational, the ARC facility is expected to provide a total of 400 MW, sufficient to power a large group of data centers located in one of the busiest server corridors in the world. The financial specifics of the deal have not been made public.

    Rising Demand for Energy

    This agreement underscores the growing strain that artificial intelligence workloads are placing on electricity supply. According to Google’s sustainability report for 2024, the company consumed a whopping 30.8 million MWh of electricity in the previous year, double the amount used in 2020. A staggering 96 percent of this energy was used by data centers. Expanding this footprint while still adhering to decarbonization goals will require new energy generation technologies.
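
    To put the 200 MW purchase into perspective, here is a quick back-of-the-envelope calculation based on the figures above; the assumption that the plant would run non-stop is ours, and real capacity factors would lower the result.

    ```python
    # Figures from the article; the continuous-operation assumption is ours.
    google_2023_mwh = 30_800_000        # 30.8 million MWh consumed in 2023
    data_center_share = 0.96            # 96% of that went to data centers
    google_offtake_mw = 200             # Google's share of ARC's planned 400 MW

    hours_per_year = 8_760
    offtake_mwh_per_year = google_offtake_mw * hours_per_year   # 1,752,000 MWh

    print(f"Data-center consumption 2023: {google_2023_mwh * data_center_share:,.0f} MWh")
    print(f"200 MW running non-stop:      {offtake_mwh_per_year:,} MWh per year")
    print(f"Share of 2023 consumption:    {offtake_mwh_per_year / google_2023_mwh:.1%}")
    ```

    Even under that generous assumption, the deal would cover only around 6 percent of what Google consumed in 2023, which underlines how much additional clean generation the company’s AI build-out will require.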

    About Commonwealth Fusion Systems

    Commonwealth Fusion Systems, which was established as a spin-off from MIT in 2018, is working on a tokamak design that utilizes high-temperature superconducting magnets to confine plasma. In 2022, the Lawrence Livermore National Laboratory managed to achieve a short net-energy gain with lasers, but no fusion project has yet been able to reach a continuous state of “engineering break-even.” CFS plans to showcase commercial operation in the early 2030s, with ARC serving as both a testing platform and a source of initial revenue.

    Investment and Future Prospects

    Google has been a financial backer of CFS since a funding round in 2021 that raised $1.8 billion. This new power-offtake agreement strengthens their partnership and sends a clear message to other fusion startups looking to attract corporate clients. In 2023, Microsoft entered into a similar but smaller agreement with Helion Energy, indicating that major cloud providers are ready to take risks on energy projects to secure long-term, zero-carbon energy sources.

    Fusion energy still has to overcome tough physical and engineering challenges, including the need for continuous plasma confinement, materials that can withstand neutron bombardment, and making the economics of fusion plants competitive with renewables and energy storage. However, the scale of commitments from companies like Google, along with the momentum in public sector research, indicates that a practical fusion energy contribution is becoming more plausible, even if it remains uncertain. If ARC meets its goals, it could signify a major shift in how the digital economy acquires clean baseload electricity.

    Source:
    Link

  • Nvidia Hits $3.92T Valuation, Surpassing Microsoft and Apple

    Nvidia Hits $3.92T Valuation, Surpassing Microsoft and Apple

    Key Takeaways

    1. Nvidia’s market value is rising due to intense competition in the AI industry, with many companies seeking AI solutions.
    2. Nvidia focuses on supplying chips and AI models rather than direct consumer applications or chatbots.
    3. The company’s Nemotron-4 (340B) model is used for generating synthetic data to train large language models (LLMs).
    4. Nvidia recently became the most valuable publicly traded company; its market cap has since climbed from $3.34 trillion to roughly $3.92 trillion.
    5. BlackRock, the largest asset management firm, manages over $11.5 trillion in assets but has a market value of about $182.6 billion.


    The rise in Nvidia’s market value is driven by the ongoing competition in the AI industry, where numerous companies are striving for a piece of the AI pie. Nvidia is essentially arming all parties involved in this intense battle. While Nvidia has developed various AI models, it does not focus on direct consumer applications or creating chatbots for everyday tasks. One notable model from Nvidia, the Nemotron‑4 (340B), is utilized for producing synthetic data that aids in training LLMs.

    Competition and Cash Flow

    With tech giants racing to construct the most sophisticated AI data centers, develop state-of-the-art models, and seize control of the market, Nvidia is profiting immensely. The company provides the majority of chips necessary for these endeavors.

    Record-Breaking Valuation

    Only two weeks ago, Nvidia achieved the status of the most valuable publicly traded company, boasting a market cap of $3.34 trillion. Its new valuation of roughly $3.92 trillion represents an increase of about 17.4% over that figure in just two weeks.
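
    The percentage is easy to verify from the two market-cap figures:

    ```python
    old_cap, new_cap = 3.34e12, 3.92e12   # market caps in USD, as reported above
    print(f"Increase: {(new_cap - old_cap) / old_cap:.1%}")   # -> 17.4%
    ```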

    If the AI competition continues to escalate, Team Green is poised to maintain its profitable trajectory. An interesting read is “Automated Stock Trading Systems” (currently priced at $20.47 on Amazon), authored by Laurens Bensdorp, which aims to guide readers on trading in the stock market.

    Asset Management Insights

    An intriguing fact to note is that BlackRock, the largest asset management firm globally, oversees more than $11.5 trillion in assets, yet its market value is around $182.6 billion.

    Source:
    Link

  • AI Showdown: Grok Impresses Mrwhosetheboss, ChatGPT Triumphs

    AI Showdown: Grok Impresses Mrwhosetheboss, ChatGPT Triumphs

    Key Takeaways

    1. Grok performed well initially but struggled before finishing second to ChatGPT.
    2. ChatGPT and Gemini had an advantage with a video generation feature not available to other models.
    3. In a real-world problem-solving task, Grok gave the most direct answer, while Perplexity struggled with confusion.
    4. In cake-making challenges, Grok correctly identified the odd item, while other models misidentified it.
    5. All models experienced “hallucinations,” confidently stating incorrect information during various tests.


    In a recent video, Mrwhosetheboss put various AI models to the test, including Grok (Grok 3), Gemini (2.5 Pro), ChatGPT (GPT-4o), and Perplexity (Sonar Pro). Throughout the video, he repeatedly praised Grok’s performance. Grok started strong, stumbled in the middle rounds, then recovered and finished in second place behind ChatGPT. It is worth noting that ChatGPT and Gemini had an advantage thanks to a feature the other models lack: video generation.

    Testing Real-World Problem Solving

    To start the evaluation, Mrwhosetheboss examined the AI models’ ability to solve real-world problems. He presented each model with the following prompt: “I drive a Honda Civic 2017, how many of the Aerolite 29″ Hard Shell (79x58x31cm) suitcases would I be able to fit in the boot?” Grok gave the most direct answer, stating “2”. ChatGPT and Gemini suggested that theoretically, it could fit 3, but realistically, it would be 2. On the other hand, Perplexity got confused and, after doing simple math, mistakenly concluded that it could fit “3 or 4” without considering the suitcase’s shape.
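
    Perplexity’s slip shows why pure volume math misleads here. In the rough sketch below, only the suitcase dimensions come from the prompt; the boot-size range is our own assumption rather than a figure from the video.

    ```python
    # Suitcase dimensions (cm) taken from the prompt; everything else is
    # back-of-the-envelope reasoning, not data from the video.
    suitcase_litres = (79 * 58 * 31) / 1000      # ~142 L per hard-shell case

    for count in (2, 3, 4):
        print(f"{count} cases need at least {count * suitcase_litres:.0f} L "
              "of perfectly usable, box-shaped space")

    # A 2017 Civic's boot offers roughly 400-500 L (our assumption), so "3 or 4"
    # only works on paper; once the rigid shells and the boot's actual shape are
    # considered, 2 (the answer Grok gave) is the realistic figure.
    ```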

    Challenging Cake-Making Skills

    Next, Mrwhosetheboss didn’t hold back as he asked the chatbots for cake-making advice. He also included an image of five items, one of which was out of place for baking: a jar of dried Porcini mushrooms. Most of the models fell for this ruse. ChatGPT misidentified it as a jar of ground mixed spice, Gemini thought it was crispy fried onions, and Perplexity guessed it was instant coffee. Grok, however, correctly recognized it as a jar of dried mushrooms from Waitrose.

    Universal Hallucinations

    Continuing with the testing, he challenged the AIs with math, product suggestions, accounting, language translations, logical reasoning, and more. A common issue across all the models was hallucination; each of them showed some degree of it at various points in the video, confidently stating things that simply weren’t real. By the end, ChatGPT took the top spot overall, with Grok finishing second.

    Artificial intelligence has significantly eased many tasks, especially since the inception of LLMs. The book “Artificial Intelligence” (currently priced at $19.88 on Amazon) aims to help individuals make the most of AI tools.

    Source:
    Link

  • Microsoft Shifts Focus to AI Agents Amid Mass Layoffs

    Microsoft Shifts Focus to AI Agents Amid Mass Layoffs

    Key Takeaways

    1. Microsoft is laying off 9,000 workers across various departments, framed as organizational adjustments.
    2. CEO Satya Nadella revealed that about 30% of Microsoft’s code is now generated by AI, indicating a push for AI integration.
    3. There are concerns about employee morale, with some expressing frustration over job cuts despite the company’s profitability.
    4. The memo from management suggested that the layoffs aim to increase “agility and effectiveness,” hinting at a potential replacement of human roles with AI.
    5. While Microsoft has not officially linked AI agents to the layoffs, the trend of using AI in development is evident in projects and internal initiatives.


    Microsoft is currently undergoing another significant wave of layoffs, letting go of 9,000 workers across different departments. Although the company’s leadership refers to these layoffs as organizational adjustments, a recent report from a developer familiar with the situation suggests that Microsoft aims to substitute its human employees with artificial intelligence.

    AI Integration on the Rise

    At the LlamaCon AI developer event held by Meta in May, Microsoft CEO Satya Nadella noted that around 30% of the code produced by the company is now generated by AI. It appears that Microsoft is accelerating its AI integration efforts, which may be linked to the recent job cuts. An Engadget report quotes a developer at Microsoft, who received a memo from Phil Spencer but was not among those laid off, indicating that management is “trying their hardest to replace as many jobs as they can with AI agents.”

    Concerns About Employee Morale

    While the report didn’t go into much detail about this assertion, the memo did mention “increase agility and effectiveness” as one of the reasons for the layoffs. If AI agents are properly utilized, they have the potential to greatly streamline various processes, lending credibility to the developer’s claim that Microsoft is striving to replace human roles with AI technology. This trend isn’t isolated, as even Amazon has recently communicated to its staff that AI agents will take over certain human job functions.

    The developer also expressed frustration over the memo and the overall state of development at Microsoft. “I’m personally really angry that Phil’s email to us highlighted that this was the most profitable year ever for Xbox while also announcing layoffs. I wasn’t clear on what part of that I was meant to feel proud about,” the developer reportedly told Engadget. They further mentioned that employees are dissatisfied with the product quality and that there’s a lot of motivational talk being used to boost morale.

    The Future of AI at Microsoft

    It’s important to highlight that Microsoft has not directly connected AI agents to the layoffs. However, given the prevalence of AI-generated code, the use of generative AI in projects like Call of Duty: Black Ops 6, and other internal AI initiatives, it seems that AI agents are a key objective for the company.

    Source:
    Link

  • AI Prefers Fresh Content: Study Shows Recency Boosts Visibility

    AI Prefers Fresh Content: Study Shows Recency Boosts Visibility

    Key Takeaways

    1. Fresh content is becoming crucial for SEO as large language models (LLMs) favor newer articles.
    2. Changing publication dates may positively impact Google rankings, contrary to previous beliefs.
    3. Content recency’s importance varies by topic; rapidly changing fields like finance prioritize the latest information.
    4. Older, high-quality content can still be effective if regularly updated and maintained.
    5. Trustworthiness and relevance are important factors considered by LLMs, alongside content freshness.


    In the age of ChatGPT, Perplexity, and Google AI Overviews, a traditional SEO factor is making a comeback: content freshness. A recent study from Seer Interactive indicates that large language models (LLMs) favor newer content over older pieces, a recency bias with important consequences for content strategy.

    The Myth of Publication Dates

    For years, the idea that simply changing a page’s publication date could lift its Google ranking was dismissed as an SEO myth. In the era of LLMs, however, there may be some truth to it. A recent analysis examined over 5,000 URLs from the log files of AI tools including ChatGPT, Perplexity, and Google AI Overviews, looking at the relationship between publication dates and visibility. The findings are striking: 89% of the content surfaced by the LLMs was published between 2023 and 2025, while only 6% of AI interactions involved material older than six years.

    Topic Variability in Content Recency

    The impact of recency differs depending on the subject matter. In rapidly changing areas like finance, there is a high demand for the latest information – content that was published before 2020 is seldom shown. The travel sector also has a tendency to highlight more current pieces. On the other hand, more stable fields, such as energy or DIY topics like building a patio, still allow for older, well-crafted articles to be featured by AI systems.

    In fast-evolving industries like finance or technology, being visible means needing regular updates and a continuous flow of new content. This is where the recency of content becomes very important – new or recently updated pieces usually rank better in AI systems. However, in more stable areas, having lasting evergreen content can still work well, as long as it continues to be of high quality and relevant.
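
    As a purely illustrative heuristic, and not something the Seer Interactive study describes, this trade-off can be modeled as an exponential freshness decay whose half-life depends on how quickly a topic moves:

    ```python
    def freshness_weight(age_days: float, half_life_days: float) -> float:
        """Illustrative recency decay: a piece loses half of its freshness
        weight every half_life_days. Fast-moving topics get a short half-life,
        evergreen topics a long one (both values below are hypothetical)."""
        return 0.5 ** (age_days / half_life_days)

    print(freshness_weight(age_days=365, half_life_days=180))    # ~0.24 (finance-like topic)
    print(freshness_weight(age_days=365, half_life_days=1460))   # ~0.84 (DIY-like topic)
    ```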

    Enhancing Visibility of Older Content

    Older, high-quality articles still have worth – well-thought-out updates can greatly improve their visibility. To keep a strong presence in AI overviews and chatbot results, it is crucial to continuously update content while maintaining its depth and detail. Although large language models generally prefer recent pieces, they also consider factors like trustworthiness and relevance.

    Source:
    Link

  • AI Disrupts Entry-Level Job Market in the UK

    AI Disrupts Entry-Level Job Market in the UK

    Key Takeaways

    1. AI growth is significantly impacting the job market, leading to concerns about job replacement for entry-level positions.
    2. There has been a 32% decrease in entry-level job openings in the UK since the introduction of ChatGPT in 2022.
    3. Entry-level positions have shrunk to about 25% of the overall UK job market in 2025.
    4. Major companies like Microsoft and Google are rapidly integrating AI into their operations, increasing reliance on technology.
    5. Experts warn that governments need to act now to manage AI regulations and mitigate potential unemployment during this technological shift.


    The rapid growth of AI in recent years is beginning to reshape industry in a major way. At the same time, experts have raised concerns that businesses will increasingly replace human labor with AI. Dario Amodei, the CEO of Anthropic, for example, has cautioned that AI could cut entry-level positions by 50% within the next five years.

    Job Market Shift

    Amodei’s forecasts now appear to be alarmingly correct, as Adzuna, a job search platform based in the UK, has noted a significant decline in job openings for entry-level positions. According to their research, ever since ChatGPT was introduced in 2022, the UK job market has experienced a 32% decrease in new “graduate jobs, apprenticeships, internships, and junior positions that don’t require a degree,” as reported by The Guardian.

    Decline in Opportunities

    This fall in entry-level vacancies is also said to have shrunk these roles to just 25% of the overall UK job market in 2025, a drop of roughly four percentage points from 2022.

    Adzuna’s results seem to align with findings from Indeed. The Guardian cites Indeed’s data, stating that university graduates in the UK are facing the “most challenging job market since 2018,” with “advertised roles for recent grads having decreased by 33% in mid-June compared to last year.”

    The Unstoppable AI

    The ongoing advancement of AI is unlikely to slow down, as companies of all sizes are quickly adapting to integrate more AI into their teams. For instance, Microsoft is already creating 20-30% of its code using AI, while Google is producing even more.

    In light of this, Dario Amodei’s remark that “you can’t just step in front of the train and stop it” is quite relevant. According to Amodei, the best we can do is “steer the train—steer it 10 degrees in a different direction from where it was going. That’s possible, but we need to act now.”

    It is uncertain how governments globally will manage AI regulations to ensure that the temporary surge in unemployment that often accompanies the beginning of a new “Industrial Revolution” doesn’t inflict as much pain this time around.

    Source:
    Link

  • Humanoid Robots Compete in Unique Soccer Tournament Showcase

    Humanoid Robots Compete in Unique Soccer Tournament Showcase

    Key Takeaways

    1. Humanoid robots participated in a fully autonomous soccer tournament at the Smart E-Sports Center in Beijing, highlighting advancements in AI technology.
    2. Four university teams programmed identical T1 robots to compete, showcasing their unique AI algorithms in direct matches.
    3. The tournament featured a 3-on-3 format with two ten-minute halves and a halftime, marking a significant departure from traditional human-influenced matches.
    4. While entertaining, the robots displayed clumsy movements, often stumbling, but the event celebrated their ability to complete matches independently.
    5. The success of this tournament suggests a future where friendly games between humans and robots could become a reality.


    Beijing, June 28, 2025 – In a groundbreaking event, humanoid robots took part in a fully autonomous soccer tournament held at the Smart E-Sports Center in Beijing. Four teams from universities were involved – Tsinghua University (THU Robotics), China Agricultural University (Mountain Sea), Beijing Information Science & Technology University (Blaze), and another Tsinghua team (Power Lab). Each team programmed identical T1 robots from Booster Robotics using their own AI algorithms to compete in direct matches.

    Unique Tournament Format

    This event wasn’t entirely without precedent, as humanoid robots have been participating in soccer at RoboCup events for years. What set this tournament apart was the total lack of human interference. The games were structured in a 3-on-3 format, featuring two halves of ten minutes each and a halftime of five minutes. In the championship match, THU Robotics triumphed over Mountain Sea with a score of 5–3, claiming the first title.

    Entertaining Yet Clumsy

    The matches leaned more towards being entertaining rather than showcasing top-tier sport. The 45-kilogram (99-pound) robots often stumbled, bumped into each other, and sometimes needed assistance to get back on their feet when they fell. Nevertheless, the audience of about 300 erupted in cheers for every successful play and save, viewing it as a sign of genuine advancement.

    For now, the players move in a way more akin to penguins on ice than to elite athletes like Kylian Mbappé. However, completing full matches without human control marks a major achievement: AI-powered machines can now handle complex tasks in real time. If this progress continues, the next exciting development – friendly games between humans and robots – may not be far off.

    Source:
    Link

  • Apple Drops Siri for Private ChatGPT and Claude AI Models

    Apple Drops Siri for Private ChatGPT and Claude AI Models

    Key Takeaways

    1. Apple is negotiating with OpenAI and Anthropic to use their large language models for Siri after struggling with its own AI development.
    2. The launch of Siri’s upgrades has been delayed twice until 2026, causing frustration within Apple’s AI team.
    3. Leadership changes were made in Apple’s AI team following the failure of Siri to drive iPhone upgrades and delays in the internal LLM.
    4. Apple found that Anthropic’s Claude LLM performed better than its Foundation Models for enhancing Siri’s capabilities.
    5. Privacy remains a top concern, with Apple exploring options to run third-party models on its own servers to maintain user privacy.


    After a period of testing its own Siri AI against competitors like ChatGPT, Claude, and Google Gemini, Apple is reportedly ready to give up on its Foundation Models.

    Talks with OpenAI and Anthropic

    The company is currently negotiating with OpenAI and Anthropic to use their large language models (LLMs) for the AI features it has been promising for Siri since introducing Apple Intelligence nine months ago.

    Unfortunately, Apple has struggled to deliver the Siri AI upgrade using its own LLM, even on its most advanced devices such as the iPhone 16 Pro Max, and has now postponed it twice, until 2026. The AI team is reportedly either disheartened by unclear direction or tempted by the huge paychecks that companies like Meta and OpenAI are offering to lure its engineers away.

    Changes in Leadership

    When the largely derivative Apple Intelligence features failed to drive an iPhone upgrade cycle and the internal Siri LLM faced delays, Apple replaced the head of its AI team and initiated a performance review. The findings suggested that the Siri AI capabilities it intended to deliver would be better served by Anthropic’s Claude LLM than by its own Foundation Models.

    ChatGPT ranked next in that comparison, which is why Apple is in discussions with both Anthropic and OpenAI about a partnership to power Siri with their chatbot technology.

    Privacy Concerns

    Apple’s primary concern when considering a third-party service is the privacy of iPhone users. It has explored the option of running Anthropic or OpenAI’s code on its own Private Cloud Compute server clusters, asking them to create tailored ChatGPT and Claude models that would allow Apple to maintain control over the privacy settings for future Siri AI users.

    Source:
    Link

  • Cloudflare Blocks Unpaid AI Web Scrapers from Accessing Data

    Cloudflare Blocks Unpaid AI Web Scrapers from Accessing Data

    Key Takeaways

    1. Cloudflare’s CEO Matthew Prince announced that all AI web crawler bots will be blocked by default to protect content creators.
    2. The online search environment is increasingly dominated by AI chatbots, making it harder for content creators to gain traffic and recognition for their work.
    3. AI crawlers are extracting data without compensating original content creators, leading to a sense of unfairness in the web ecosystem.
    4. Cloudflare plans to launch a marketplace to connect content creators with AI companies, focusing on content quality and knowledge enhancement.
    5. Recent disruptions caused by aggressive AI crawlers have led platforms like SourceHut to block major cloud service providers due to excessive traffic.


    Declaring “Content Independence Day,” Cloudflare’s CEO Matthew Prince shared significant updates to the company’s web service system. From now on, all AI web crawler bots will be blocked by default.

    In a blog entry, Prince explained how the current online search environment is dominated by AI chatbots such as Google’s Gemini and OpenAI’s ChatGPT. While these tools provide value, they also extract data from the internet without giving anything back, leaving the original content creators unrewarded.

    Challenges for Content Creators

    Prince pointed out that recent modifications in Google Search have made it ten times “more difficult for a content creator to get the same volume of traffic” as they did a decade ago.

    He stated, “Instead of being a fair trade, the web is being stripmined by AI crawlers, with content creators seeing almost no traffic and thus almost no value.”

    Prince expressed that the content being scraped serves as “the fuel that powers AI engines,” and it is only just that the original creators receive compensation for their work.

    New Marketplace Initiative

    Cloudflare also unveiled plans for a new marketplace designed to connect creators with AI companies. This marketplace will evaluate available content not just based on the traffic it brings in but also “on how much it furthers knowledge.” Prince is optimistic that this will help AI engines improve swiftly, potentially ushering in a new golden age of high-quality content creation.

    He acknowledged that he doesn’t have all the solutions right now, but the company is collaborating with “leading computer scientists and economists to find them.”

    Recent Issues with AI Crawlers

    Recently, SourceHut, a platform for hosting open-source Git repositories, reported disruptions caused by “aggressive LLM crawlers.” They have blocked multiple cloud service providers, including Google Cloud and Microsoft Azure, due to the overwhelming traffic coming from their networks.

    In January, DoubleVerify, a web analytics platform, noted an 86% rise in General Invalid Traffic (GIVT) from AI scrapers and other automated tools compared to 2024.

    Despite previous commitments, OpenAI’s GPTBot has also reportedly found ways to ignore or bypass a site’s robots.txt file entirely, driving a huge increase in traffic for domain owners and potentially high costs.
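
    For context, robots.txt is the voluntary opt-out mechanism being sidestepped here. The minimal sketch below shows how such a rule is normally expressed and checked; the disallow rule and URL are illustrative placeholders, while GPTBot is the user-agent name OpenAI documents for its crawler.

    ```python
    from urllib.robotparser import RobotFileParser

    # A site that opts out of OpenAI's crawler publishes rules like these:
    rules = [
        "User-agent: GPTBot",
        "Disallow: /",
    ]

    robots = RobotFileParser()
    robots.parse(rules)

    # A well-behaved crawler checks this before fetching a page; the article's
    # point is that compliance is voluntary and is reportedly being ignored.
    print(robots.can_fetch("GPTBot", "https://example.com/article"))  # False
    ```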

    Source:
    Link